96 research outputs found

    Data Challenges in High-Performance Risk Analytics

    Risk analytics is important for quantifying, managing and analysing risks in settings ranging from manufacturing to finance. In this paper, the data challenges in the three stages of the high-performance risk analytics pipeline, namely risk modelling, portfolio risk management and dynamic financial analysis, are presented

    Parallel Simulations for Analysing Portfolios of Catastrophic Event Risk

    At the heart of the analytical pipeline of a modern quantitative insurance/reinsurance company is a stochastic simulation technique for portfolio risk analysis and pricing referred to as Aggregate Analysis. Aggregate Analysis supports the computation of risk measures, including Probable Maximum Loss (PML) and Tail Value-at-Risk (TVaR), for a variety of complex property catastrophe insurance contracts, including Cat eXcess of Loss (XL), or Per-Occurrence XL, and Aggregate XL, as well as contracts that combine these structures. In this paper, we explore parallel methods for aggregate risk analysis. A parallel aggregate risk analysis algorithm and an engine based on the algorithm are proposed. The engine is implemented in C and OpenMP for multi-core CPUs and in C and CUDA for many-core GPUs. Performance analysis of the algorithm indicates that GPUs offer a cost-effective HPC alternative for aggregate risk analysis. The optimised algorithm on the GPU performs a 1 million trial aggregate simulation with 1,000 catastrophic events per trial, on a typical exposure set and contract structure, in just over 20 seconds, approximately 15x faster than the sequential counterpart. This is sufficient to support the real-time pricing scenario in which an underwriter analyses different contractual terms and pricing while discussing a deal with a client over the phone. Comment: Proceedings of the Workshop at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 2012, 8 pages
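
    A minimal sketch, in C with OpenMP, of the kind of simulation loop described above: trials are independent, so the outer loop over trials parallelises directly. The loss sampler, the per-occurrence XL layer terms and all constants are assumptions made for the example; this is not the paper's engine, whose event-loss tables and contract structures are far richer. Compile with gcc -fopenmp.

        /* Illustrative per-occurrence XL aggregate simulation,
         * parallelised with OpenMP over independent trials.
         * Sampler, contract terms and constants are invented. */
        #include <stdio.h>
        #include <stdlib.h>

        #define NUM_TRIALS       1000000   /* simulation trials             */
        #define EVENTS_PER_TRIAL 1000      /* catastrophic events per trial */

        /* Placeholder loss sampler (a small LCG); a real engine draws
         * ground-up losses from event-loss tables. */
        static double sample_event_loss(unsigned long long *state)
        {
            *state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
            return (double)(*state >> 11) / 9007199254740992.0 * 1e6;
        }

        /* Per-occurrence XL terms: the layer pays the loss above the
         * attachment point, capped at the limit. */
        static double apply_occ_xl(double loss, double attachment, double limit)
        {
            double recovered = loss - attachment;
            if (recovered < 0.0)   recovered = 0.0;
            if (recovered > limit) recovered = limit;
            return recovered;
        }

        int main(void)
        {
            double *trial_loss = malloc(NUM_TRIALS * sizeof *trial_loss);

            /* Trials are independent, so this loop parallelises cleanly. */
            #pragma omp parallel for schedule(static)
            for (long t = 0; t < NUM_TRIALS; t++) {
                unsigned long long seed = 2ULL * t + 1;  /* per-trial stream */
                double agg = 0.0;
                for (int e = 0; e < EVENTS_PER_TRIAL; e++)
                    agg += apply_occ_xl(sample_event_loss(&seed), 2.0e5, 5.0e5);
                trial_loss[t] = agg;
            }

            /* PML and TVaR are then read off the empirical distribution
             * of trial losses (sort, take a quantile or a tail average). */
            printf("trial 0 aggregate loss: %.2f\n", trial_loss[0]);
            free(trial_loss);
            return 0;
        }

    The same per-trial independence is what makes a CUDA version natural, with trials mapped to GPU threads.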

    QuPARA: Query-Driven Large-Scale Portfolio Aggregate Risk Analysis on MapReduce

    Stochastic simulation techniques are used for portfolio risk analysis. Risk portfolios may consist of thousands of reinsurance contracts covering millions of insured locations. To quantify risk, each portfolio must be evaluated in up to a million simulation trials, each capturing a different possible sequence of catastrophic events over the course of a contractual year. In this paper, we explore the design of a flexible framework for portfolio risk analysis that facilitates answering a rich variety of catastrophic risk queries. Rather than aggregating simulation data in order to produce a small set of high-level risk metrics efficiently (as is often done in production risk management systems), the focus here is on allowing the user to pose queries on unaggregated or partially aggregated data. The goal is to provide a flexible framework that can be used by analysts to answer a wide variety of unanticipated but natural ad hoc queries. Such detailed queries can help actuaries or underwriters to better understand the multiple dimensions (e.g., spatial correlation, seasonality, peril features, construction features, and financial terms) that can impact portfolio risk. We implemented a prototype system, called QuPARA (Query-Driven Large-Scale Portfolio Aggregate Risk Analysis), using Hadoop, which is Apache's implementation of the MapReduce paradigm. This allows the user to take advantage of large parallel compute servers in order to answer ad hoc risk analysis queries efficiently even on very large data sets typically encountered in practice. We describe the design and implementation of QuPARA and present experimental results that demonstrate its feasibility. A full portfolio risk analysis run consisting of a 1,000,000 trial simulation, with 1,000 events per trial, and 3,200 risk transfer contracts can be completed on a 16-node Hadoop cluster in just over 20 minutes. Comment: 9 pages, IEEE International Conference on Big Data (BigData), Santa Clara, USA, 2013
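
    To make the mapper/reducer division of labour concrete, here is a minimal Hadoop Streaming style pair in C for one hypothetical query: total loss per simulation trial across contracts. QuPARA itself is written in Java against the Hadoop API, so this sketch only mirrors the shape of the computation; the record format trial,contract,loss and the single binary handling both roles are assumptions for the example.

        /* Conceptual map/reduce pair for one ad hoc query. Under Hadoop
         * Streaming, the mapper and reducer are plain executables that
         * read stdin and write "key\tvalue" lines. */
        #include <stdio.h>
        #include <string.h>

        /* Mapper: key each loss record by its trial identifier. */
        static void map_records(void)
        {
            long trial, contract;
            double loss;
            while (scanf("%ld,%ld,%lf", &trial, &contract, &loss) == 3)
                printf("%ld\t%f\n", trial, loss);
        }

        /* Reducer: Hadoop delivers mapper output grouped and sorted by
         * key, so one pass sums each trial (assumes trial ids >= 0). */
        static void reduce_records(void)
        {
            long trial, current = -1;
            double loss, sum = 0.0;
            while (scanf("%ld %lf", &trial, &loss) == 2) {
                if (trial != current && current != -1) {
                    printf("%ld\t%f\n", current, sum);
                    sum = 0.0;
                }
                current = trial;
                sum += loss;
            }
            if (current != -1)
                printf("%ld\t%f\n", current, sum);
        }

        int main(int argc, char **argv)
        {
            /* Invoked as the map or the reduce step of a Streaming job. */
            if (argc > 1 && strcmp(argv[1], "reduce") == 0)
                reduce_records();
            else
                map_records();
            return 0;
        }

    The sort-and-group step that Hadoop performs between the two phases is what allows the reducer's single pass; the Java API's reduce(key, values) interface exposes the same grouping directly.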

    Scalable parallel geometric algorithms for coarse grained multicomputers

    Whereas most of the literature assumes that the number of processors p is a function of the problem size n, in scalable algorithms p becomes a parameter of the time complexity. This is a more realistic model of real parallel machines and yields optimal algorithms for the case that n ≥ H(p), where H is a function depending on the architecture of the interconnection network. In this paper we present scalable algorithms for a number of geometric problems, namely lower envelope of line segments, 2D-nearest neighbour, 3D-maxima, 2D-weighted dominance counting, area of the union of rectangles, and 2D-convex hull. The main idea of these algorithms is to decompose the problem into p subproblems of size O(F(n,p) + f(p)), with f(p) ∈ O(F(n,p)), which can be solved independently using optimal sequential algorithms. For each problem we present a spatial decomposition scheme based on some geometric observations. The decomposition schemes have in common that they can be computed by globally sorting the entire data set at most twice. The data redundancy of f(p) duplicates of data elements per processor does not increase the asymptotic time complexity and ranges, for the algorithms presented in this paper, from p to p². The algorithms do not depend on a specific architecture; they are easy to implement and, as experiments show, efficient in practice
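
    The decompose-then-solve-sequentially pattern is easy to see in code. The sketch below, in C with OpenMP (compile with gcc -fopenmp), applies it to 2D-maxima (points dominated by no other point in both coordinates), a simplified stand-in for the paper's problems, not the paper's algorithm: one global sort by x, then p independent chunks, each solved by a sequential sweep seeded with a single value combined from the other chunks. The shared chunk_max array mimics the coarse grained model's inter-processor communication; the synthetic input is an assumption for the example.

        /* Illustrative CGM-style decomposition: one global sort, then
         * p independent subproblems solved by sequential sweeps. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <omp.h>

        typedef struct { double x, y; } Point;

        static unsigned long long rng = 42;       /* synthetic input only */
        static double frand(void)
        {
            rng = rng * 6364136223846793005ULL + 1442695040888963407ULL;
            return (double)(rng >> 11) / 9007199254740992.0;
        }

        /* Sort by x descending so a sweep sees larger x first. */
        static int cmp_x_desc(const void *a, const void *b)
        {
            double d = ((const Point *)b)->x - ((const Point *)a)->x;
            return (d > 0) - (d < 0);
        }

        int main(void)
        {
            enum { N = 1 << 20 };
            int p = omp_get_max_threads();        /* "processors" */
            Point  *pts       = malloc(N * sizeof *pts);
            char   *maximal   = calloc(N, 1);
            double *chunk_max = malloc(p * sizeof *chunk_max);

            for (long i = 0; i < N; i++) {        /* synthetic input */
                pts[i].x = frand();
                pts[i].y = frand();
            }
            qsort(pts, N, sizeof *pts, cmp_x_desc);   /* global sort */

            /* Phase 1: each chunk finds its local maximum y. */
            #pragma omp parallel for
            for (int c = 0; c < p; c++) {
                long lo = (long)c * N / p, hi = (long)(c + 1) * N / p;
                double m = -1.0;
                for (long i = lo; i < hi; i++)
                    if (pts[i].y > m) m = pts[i].y;
                chunk_max[c] = m;
            }

            /* Phase 2: sweep each chunk independently, seeded with the
             * maximum y over all chunks to its left (larger x). */
            #pragma omp parallel for
            for (int c = 0; c < p; c++) {
                long lo = (long)c * N / p, hi = (long)(c + 1) * N / p;
                double best = -1.0;
                for (int k = 0; k < c; k++)
                    if (chunk_max[k] > best) best = chunk_max[k];
                for (long i = lo; i < hi; i++)
                    if (pts[i].y > best) { maximal[i] = 1; best = pts[i].y; }
            }

            long count = 0;
            for (long i = 0; i < N; i++) count += maximal[i];
            printf("%ld maximal points out of %d\n", count, N);
            free(pts); free(maximal); free(chunk_max);
            return 0;
        }

    Only the p chunk maxima cross chunk boundaries, echoing the abstract's point that, after a constant number of global sorts, each subproblem is solved independently by an optimal sequential algorithm.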

    Optical clustering on a mesh-connected computer
